17 research outputs found
Coordinated Per-Antenna Power Minimization for Multicell Massive MIMO Systems with Low-Resolution Data Converters
A multicell-coordinated beamforming solution is presented for massive multiple-input multiple-output (MIMO) orthogonal frequency-division multiplexing (OFDM) systems employing low-resolution data converters and per-antenna power constraints. For a more realistic deployment, we aim to find the downlink (DL) beamformer that minimizes the maximum per-antenna transmit power of each basestation under received signal quality constraints. We show that strong duality holds between the primal DL formulation and its more tractable Lagrangian dual, which can be interpreted as a virtual uplink (UL) problem with adjustable noise covariance matrices. For a fixed set of noise covariance matrices, the virtual UL solution can be used to compute the DL beamformer, and the noise covariance matrices can subsequently be updated with an associated subgradient. Our primary contributions are (1) formulating the quantized DL OFDM antenna power minimax problem and deriving its associated dual problem, (2) proving strong duality and interpreting the dual as a virtual quantized UL OFDM problem, and (3) developing an iterative minimax algorithm based on the dual problem. Simulations validate the proposed algorithm in terms of the maximum antenna transmit power and peak-to-average-power ratio.
Comment: submitted for possible IEEE journal publication.
Quantitative evaluation and reversion analysis of the attractor landscapes of an intracellular regulatory network for colorectal cancer
The molecular profiles of CMS cancer cells, statistical significance analysis of reversion targets, and synergistic effect analysis of every two-node inhibition. (XLSX 67 kB)
Towards power-efficient and intelligent wireless communication systems
With the growing demand for higher data rates and more reliable service capabilities, wireless communication systems continue to grow in popularity and importance. To enable higher data rates via broader bandwidth, millimeter wave (mmWave) systems are deployed in modern and future communication systems. Due to the high transmission loss of the mmWave frequency bands, a massive number of antennas are employed to focus transmitted power in narrow radio frequency (RF) beams. However, associating one RF chain with two high-resolution data converters for each antenna element would consume a prohibitively large amount of power. Furthermore, challenging service requirements can be handled by machine learning techniques in a variety of application spaces. The goal of this dissertation is to propose communication systems that are not only reliable and high-performing, but also power-efficient and intelligent. Two possible ways to alleviate the huge power consumption problem are 1) low-resolution data converters and 2) hybrid analog-digital beamforming architectures, since the former reduces the power consumption of each individual RF chain and the latter directly scales down the number of RF chains. Additionally, intelligent communication systems that can adapt to changing network conditions and user requirements are crucial for ensuring reliable and efficient communication. In either case, these solutions introduce severe non-convexity and non-linearity to the entire system. In this regard, I propose new solutions for future communication systems that require a fundamental re-design of current systems based on a power-efficient and intelligent framework.
First, I investigate a coordinated multipoint (CoMP) beamforming and power control problem for base stations (BSs) with massive antenna arrays under coarse quantization by low-resolution analog-to-digital converters (ADCs) and digital-to-analog converters (DACs). I first formulate total power minimization problems for both uplink (UL) and downlink (DL) systems subject to signal-to-quantization-plus-interference-and-noise ratio (SQINR) constraints. I then show strong duality for the UL and DL problems under coarse quantization when channel reciprocity holds under the time-division duplexing (TDD) assumption. Leveraging the duality, I propose a framework directed toward a twofold aim: to discover the optimal transmit powers in UL by developing an iterative algorithm in a distributed manner, and to obtain the optimal precoder in DL as a scaled instance of the UL combiner. Under homogeneous transmit power and SQINR constraints per cell, I further derive a deterministic solution for the UL CoMP problem by analyzing the lower bound of the SQINR. Lastly, I extend the derived result to wideband orthogonal frequency-division multiplexing (OFDM) systems to optimize transmit power and beamformer for all subcarriers. Simulation results validate the theoretical results and proposed algorithms in terms of total transmit power, duality gap, and convergence. Second, I aim to find the DL beamformer that minimizes the maximum power on the transmit antenna array of each BS under received SQINR constraints while minimizing per-antenna transmit power for a more realistic deployment. I first formulate the quantized DL OFDM antenna power minimax problem and derive its associated dual problem. After proving strong duality, I use the associated UL dual solution to compute the DL beamformer. Subsequently, the DL beamformer is used to update the covariance matrix of the uplink noise signals.
This sequence of steps builds an efficient algorithm for finding a numerical solution. Simulations validate the proposed algorithm in terms of the maximum antenna transmit power and peak-to-average-power ratio. Third, I propose a learning-based maximum likelihood detection framework with an acceptable learning length for uplink massive multiple-input multiple-output (MIMO) systems with one-bit ADCs. The learning-based detection only requires counting the occurrences of the quantized outputs at each antenna. Learning in the high signal-to-noise ratio (SNR) regime, however, needs excessive training to estimate the extremely small likelihood probabilities. To address this drawback, I utilize a dithering signal to artificially decrease the SNR and then remove the impact of the dithering noise via post-processing. I evolve the technique by developing an adaptive dither-and-learning method that updates the dithering power according to the patterns observed in the quantized dithered signals. Lastly, the computed likelihood probabilities are utilized in deriving log-likelihood ratios to enable state-of-the-art channel coding schemes. I compare the uncoded and coded detection performance of the proposed algorithm with other learning-based frameworks and show that the proposed algorithm achieves performance closest to the optimal. Fourth, I propose a deep reinforcement learning (DRL)-based solution for joint hybrid beamforming (HB) and power control problems when multiple massive MIMO BSs are communicating with multiple users in the uplink mmWave band. The HB method requires both digital and analog beamformers, with the latter using discrete phase shifters to project high-dimensional antenna ports to low-dimensional logical ports and scale down the number of RF chains. However, this results in non-convexity, making the problem difficult to solve using existing algorithms.
In multicell uplink communication systems, I aim to jointly design the HB at each BS and the transmit power control of the associated users while ensuring that the received signal-to-interference-and-noise ratio (SINR) constraints are satisfied. Starting from the primal problem, I cast the design as a reinforcement learning task. To handle the combination of discrete and continuous inputs, I use the deep deterministic policy gradient (DDPG) algorithm, which outputs a valid action that maps to the design factors. In particular, I aim to control each phase shifter individually by introducing an intermediate vector and applying a differentiable argmax function to estimate the phase angle index. The proposed method is evaluated through simulation results based on the achieved SINR. The four contributions could make a worthwhile enhancement to the development of power-efficient and intelligent wireless communication systems by meeting the communication needs of modern society while minimizing energy consumption and maximizing the use of available resources.
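The differentiable argmax used for per-phase-shifter control can be sketched with a temperature-controlled softmax. The toy below is a hedged illustration (the 2-bit codebook and the logit values are made up, and the DDPG integration is not shown): as the temperature drops, the soft selection over a discrete phase codebook approaches a hard argmax while remaining differentiable at higher temperatures.

```python
import math

def soft_phase_select(logits, angles, temperature):
    """Differentiable surrogate for argmax over a discrete phase codebook.

    A softmax over the intermediate logit vector yields a soft one-hot
    selection; as the temperature decreases it approaches a hard argmax,
    so the phase index stays trainable end to end. (Names and values are
    illustrative, not from the dissertation.)
    """
    m = max(logits)                             # subtract max for stability
    exps = [math.exp((l - m) / temperature) for l in logits]
    z = sum(exps)
    weights = [e / z for e in exps]             # soft one-hot selection
    phase = sum(w * a for w, a in zip(weights, angles))
    return phase, weights

# Hypothetical 2-bit phase shifter codebook: {0, pi/2, pi, 3pi/2}
angles = [0.0, math.pi / 2, math.pi, 3 * math.pi / 2]
logits = [0.1, 2.3, 0.4, -1.0]                  # hypothetical network output

soft_phase, _ = soft_phase_select(logits, angles, temperature=1.0)
hard_phase, w = soft_phase_select(logits, angles, temperature=0.01)
# At low temperature the selection collapses onto the argmax entry (pi/2 here).
```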
Quantized Massive MIMO Systems With Multicell Coordinated Beamforming and Power Control
In this paper, we investigate a coordinated multipoint (CoMP) beamforming and power control problem for base stations (BSs) with massive antenna arrays under coarse quantization by low-resolution analog-to-digital converters (ADCs) and digital-to-analog converters (DACs). Unlike high-resolution ADC and DAC systems, the non-negligible quantization noise that must be considered in CoMP design makes the problem more challenging. We first formulate total power minimization problems for both uplink (UL) and downlink (DL) systems subject to signal-to-interference-and-noise ratio (SINR) constraints. We then derive strong duality for the UL and DL problems under coarse quantization. Leveraging the duality, we propose a framework directed toward a twofold aim: to discover the optimal transmit powers in UL by developing an iterative algorithm in a distributed manner, and to obtain the optimal precoder in DL as a scaled instance of the UL combiner. Under homogeneous transmit power and SINR constraints per cell, we further derive a deterministic solution for the UL CoMP problem by analyzing the lower bound of the SINR. Lastly, we extend the derived result to wideband orthogonal frequency-division multiplexing systems to optimize transmit power and beamformer for all subcarriers. Simulation results validate the theoretical results and proposed algorithms.
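The distributed UL power update can be illustrated with the classic fixed-point iteration in which each user scales its power by the ratio of its target SINR to its measured SINR; the coordinated, quantization-aware algorithm in the paper generalizes this idea. The channel gains, noise level, and targets below are made-up toy numbers for illustration only.

```python
# Distributed uplink power control sketch (not the paper's exact algorithm):
# each user iterates p <- p * (target SINR / measured SINR), a standard
# interference-function fixed point that converges when targets are feasible.
G = [[1.0, 0.2, 0.1],
     [0.3, 1.0, 0.2],
     [0.1, 0.2, 1.0]]      # G[k][j]: gain from user j at the BS serving user k
noise = 0.1
targets = [1.0, 1.5, 1.2]  # per-user SINR targets (feasible for this G)
p = [1.0, 1.0, 1.0]        # initial transmit powers

for _ in range(200):
    sinr = []
    for k in range(3):
        interf = sum(G[k][j] * p[j] for j in range(3) if j != k)
        sinr.append(G[k][k] * p[k] / (interf + noise))
    # Each user updates using only its own measured SINR: fully distributed.
    p = [p[k] * targets[k] / sinr[k] for k in range(3)]
# At convergence, every user meets its target SINR with minimal total power.
```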
Robust learning-based ML detection for massive MIMO systems with one-bit quantized signals
In this paper, we investigate learning-based maximum likelihood (ML) detection for uplink massive multiple-input multiple-output (MIMO) systems with one-bit analog-to-digital converters (ADCs). To overcome the significant dependency of learning-based detection on the training length, we propose two one-bit ML detection methods: a biased-learning method and a dithering-and-learning method. The biased-learning method keeps likelihood functions with zero probability from wiping out the information obtained through learning, thereby providing more robust detection performance. Extending the biased method to a system with knowledge of the received signal-to-noise ratio, the dithering-and-learning method estimates more likelihood functions by adding dithering noise to the quantizer input. The proposed methods are further improved by adopting a post likelihood function update, which exploits correctly decoded data symbols as training pilot symbols. The proposed methods avoid the need for channel estimation. Simulation results validate the detection performance of the proposed methods in terms of symbol error rate. © 2019 IEEE
Adaptive Learning-Based Detection for One-Bit Quantized Massive MIMO Systems
We propose an adaptive learning-based framework for uplink massive multiple-input multiple-output (MIMO) systems with one-bit analog-to-digital converters. Learning-based detection does not need to estimate channels, which overcomes a key drawback of one-bit quantized systems. During training, learning-based detection suffers at high signal-to-noise ratio (SNR) because observations are biased to +1 or -1, which leads to many zero-valued empirical likelihood functions. At low SNR, observations vary frequently in value, but the high noise power makes capturing the effect of the channel difficult. To address these drawbacks, we propose an adaptive dithering-and-learning method. During training, received values are mixed with dithering noise whose statistics are known to the base station, and the dithering noise power is updated for each antenna element depending on the observed pattern of the output. We then use the refined probabilities in the one-bit maximum likelihood detection rule. Simulation results validate the detection performance of the proposed method against our previous method using fixed dithering noise power, as well as zero-forcing and optimal ML detection, both of which assume perfect channel knowledge.
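The core dither-and-learning step can be sketched in a toy scalar model (made-up signal and noise levels, not the full per-antenna MIMO setup): at high SNR the one-bit output is almost always +1, so empirical likelihoods degenerate; adding Gaussian dither with known power makes both outcomes occur, and the dither's effect is then inverted in post-processing through the Gaussian CDF.

```python
import random
from statistics import NormalDist

random.seed(0)
s, sigma_n, sigma_d = 2.0, 0.1, 1.0     # signal level, noise std, dither std (toy values)
N = 20000
phi = NormalDist()

# Learning: count +1 occurrences of the dithered one-bit output, which
# estimates Phi(s / sqrt(sigma_n^2 + sigma_d^2)) rather than a degenerate 1.0.
hits = sum(1 for _ in range(N)
           if s + random.gauss(0, sigma_n) + random.gauss(0, sigma_d) > 0)
p_dithered = hits / N

# Post-processing: invert the Gaussian CDF and rescale to undo the dithering.
s_hat = (sigma_n ** 2 + sigma_d ** 2) ** 0.5 * phi.inv_cdf(p_dithered)
p_clean = phi.cdf(s_hat / sigma_n)      # recovered high-SNR likelihood of +1
```

Without dither, nearly all training samples would be +1 and the estimated -1 likelihood would be exactly zero, which is the failure mode the method avoids.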
Fully Distributed Multicast Routing Protocol for IEEE 802.15.8 Peer-Aware Communication
The IEEE 802.15.8 standard provides a peer-aware communication (PAC) protocol for peer-to-peer infrastructureless service with fully distributed coordination. One of the most promising services in IEEE 802.15.8 is group multicast communication with simultaneous membership in multiple groups, typically up to 10 groups, in a dense network topology. Most of the existing multicast techniques in mobile ad hoc networks (MANETs) have significant overhead for managing the multicast group and thus cannot be used for fully distributed PAC networks. In this paper, we propose a lightweight multicast routing protocol referred to as the fully distributed multicast routing protocol (FDMRP). FDMRP minimizes routing table entries and thus reduces control message overhead for multicast group management. To balance the control-message load, all nodes in the network maintain a similar number of routing entries for nodes in the same multicast group. To measure the effectiveness of the proposed FDMRP against existing schemes, we evaluated performance with the OPNET simulator. The evaluation shows that FDMRP can reduce the number of routing entries and control message overhead by up to 85% and 95%, respectively, when the number of nodes exceeds 500.
Pattern-Identified Online Task Scheduling in Multitier Edge Computing for Industrial IoT Services
In smart manufacturing, production machinery and auxiliary devices, referred to as the industrial Internet of things (IIoT), are connected to a unified networking infrastructure for management and command delivery in a precise production process. However, providing autonomous, reliable, and real-time offloaded services for such production is an open challenge, since these IIoT devices are assumed to be lightweight embedded platforms with limited computing performance. In this paper, we propose a pattern-identified online task scheduling (PIOTS) mechanism for the networking infrastructure, where multitier edge computing is provided, in order to handle the offloaded tasks in real time. First, historical IIoT task patterns in every timeslot are used to train a self-organizing map (SOM), which represents the features of the task patterns within defined dimensions. Then, offline task scheduling among edge-computing-enabled entities is performed on the set of all SOM neurons using the Hungarian method to determine the expected optimal task assignments. At run time, whenever a task arrives at the infrastructure, the expected optimal assignment for the task is scheduled to the appropriate edge-computing-enabled entity. Numerical simulation results show that the proposed PIOTS mechanism outperforms existing solutions in terms of computation performance and service capability.
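The offline scheduling step (minimum-cost assignment of SOM prototypes to edge entities) can be sketched as follows. The paper uses the Hungarian method; for a tiny made-up cost matrix, exhaustive search over permutations finds the same optimum and keeps the sketch self-contained.

```python
from itertools import permutations

# Sketch of the offline assignment: map each task-pattern prototype (SOM
# neuron) to one edge entity at minimum total cost. Costs are invented for
# illustration; a real deployment would use the Hungarian method, which
# scales polynomially instead of factorially.
cost = [[4, 1, 3],          # cost[i][j]: cost of serving prototype i
        [2, 0, 5],          #             on edge entity j
        [3, 2, 2]]

best_perm, best_cost = None, float("inf")
for perm in permutations(range(3)):
    total = sum(cost[i][perm[i]] for i in range(3))
    if total < best_cost:
        best_perm, best_cost = perm, total
# best_perm maps each prototype to its expected-optimal edge entity; at run
# time an arriving task matched to prototype i is sent to entity best_perm[i].
```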
RETE-ADH: An Improvement to RETE for Composite Context-Aware Service
We propose a new pattern matching algorithm for composite context-aware services. The new algorithm, RETE-ADH, extends RETE to enhance systems that are based on the composite context-aware service architecture. RETE-ADH increases matching speed by searching only the subset of rules that can be matched. In addition, RETE-ADH is scalable and suitable for parallelization. We describe the design of the proposed algorithm and present experimental results from a simulated smart office environment comparing it with other pattern matching algorithms, showing that the proposed algorithm outperforms the original RETE by 85%.
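The idea of evaluating only the subset of rules that can possibly match can be sketched with a simple attribute index; the rules below are illustrative inventions, and RETE-ADH's actual network construction is considerably more involved.

```python
from collections import defaultdict

# Toy rule base for a context-aware smart office: each rule tests one
# context attribute. Indexing rules by that attribute means an incoming
# context event only visits rules that could possibly fire, instead of
# scanning the whole rule base as a naive matcher would.
rules = [
    ("turn_on_ac", "temperature", lambda v: v > 28),
    ("dim_lights", "luminance", lambda v: v > 500),
    ("open_window", "temperature", lambda v: v > 30),
]

index = defaultdict(list)
for name, attr, cond in rules:
    index[attr].append((name, cond))

def match(event_attr, value):
    # Only the rules indexed under this attribute are evaluated.
    return [name for name, cond in index[event_attr] if cond(value)]

fired = match("temperature", 31)   # both temperature rules fire
```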